MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

Authors

  • Radoslaw Niewiadomski
  • Maurizio Mancini
  • Tobias Baur
  • Giovanna Varni
  • Harry J. Griffin
  • M. S. Hane Aung
Abstract

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter, with a focus on full-body movements and different laughter types. It contains both induced and interactive laughs from human triads. In total, we collected 500 laugh episodes from 16 participants. The data consists of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data. In this paper we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and synchronization between different independent sources of data. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of the analysis of nonverbal behavior patterns in laughter.
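The abstract mentions synchronization between independent data sources (motion capture, audio/video, physiological sensors) without detailing how it is done. As a loose illustration only, and not the authors' actual pipeline, the Python sketch below shows one common approach: each stream is shifted so that a shared synchronization event (e.g., a clap detected in every modality) lands at time zero, then resampled onto a common analysis timeline. All stream names, sampling rates, and offsets here are assumptions made for the example.

```python
# Hypothetical sketch: aligning two independently recorded streams
# (e.g., a mocap channel and a physiological channel) onto a shared
# timeline after a common synchronization event. Rates and offsets
# below are illustrative assumptions, not the MMLI recording setup.
import numpy as np

def align_stream(timestamps, values, sync_time, common_timeline):
    """Shift a stream so its sync event lands at t = 0, then resample
    it onto the shared timeline by linear interpolation."""
    shifted = np.asarray(timestamps) - sync_time
    return np.interp(common_timeline, shifted, np.asarray(values))

# A 120 Hz "mocap" channel and a 100 Hz "physiological" channel,
# each with its own clock and its own detected sync event.
mocap_t = np.arange(0, 10, 1 / 120) + 2.35     # sync event at 2.35 s on this clock
mocap_v = np.sin(2 * np.pi * 0.5 * mocap_t)    # dummy signal
physio_t = np.arange(0, 10, 1 / 100) + 0.80    # sync event at 0.80 s on this clock
physio_v = np.cos(2 * np.pi * 0.5 * physio_t)  # dummy signal

common_t = np.arange(0, 7, 1 / 50)             # shared 50 Hz analysis timeline
mocap_aligned = align_stream(mocap_t, mocap_v, sync_time=2.35, common_timeline=common_t)
physio_aligned = align_stream(physio_t, physio_v, sync_time=0.80, common_timeline=common_t)

print(mocap_aligned.shape, physio_aligned.shape)  # both (350,), comparable sample-by-sample
```

In practice, a corpus of this kind would first detect the synchronization event separately in each modality (e.g., an audio clap onset or a marker spike in the motion-capture data); once those per-stream offsets are known, the interpolation step is the same.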


Related articles

Exploring Sounded and Silent Laughter in Multiparty Task-based and Social Interaction - Audio, Video and Biometric Signals

We report on our explorations of laughter in multiparty spoken interaction. Laughter is universally observed in human interaction. It is multimodal in nature: a stereotyped exhalation from the mouth in conjunction with rhythmic head and body movement. Predominantly occurring in company rather than solo, it is believed to aid social bonding. Spoken interaction is widely studied through corpus an...


Exploring the Body and Head Kinematics of Laughter, Filled Pauses and Breaths

We present ongoing work in the DUEL project, which focuses on the study of disfluencies, exclamations, and laughter in dialogue. Here we focus on the multimodal aspects of disfluent vocalizations, namely laughter and laughed speech, filled pauses, and breathing noises. We exemplify these phenomena in the rich multimodal Dream Apartment Corpus, a natural dialogue corpus, which, in addition to co...


DUEL: A Multi-lingual Multimodal Dialogue Corpus for Disfluency, Exclamations and Laughter

We present the DUEL corpus, consisting of 24 hours of natural, face-to-face, loosely task-directed dialogue in German, French and Mandarin Chinese. The corpus is uniquely positioned as a cross-linguistic, multimodal dialogue resource controlled for domain. DUEL includes audio, video and body tracking data and is transcribed and annotated for disfluency, laughter and exclamations.


Building a Multimodal Laughter Database for Emotion Recognition

Laughter is a significant paralinguistic cue that is largely ignored in multimodal affect analysis. In this work, we investigate how a multimodal laughter corpus can be constructed and annotated with both discrete and dimensional emotion labels for acted and spontaneous laughter. Professional actors enacted emotions to produce acted clips, while spontaneous laughter was collected from volun...


A tool to elicit and collect multicultural and multimodal laughter

We present the implementation of a data collection tool for multicultural and multimodal laughter at the 14th Interspeech conference. The application will automatically record and analyze audio and video streams to provide real-time feedback. Using this tool, we expect to collect multimodal cues of different kinds of laughter elicited in participants with funny videos, as well as jokes and tongue...



Journal:

Volume   Issue

Pages  -

Publication date: 2013